We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors. While the notion of perfect calibration is well-understood, there is no consensus on how to quantify the distance from perfect calibration. Numerous calibration measures have been proposed in the literature, but it is unclear how they compare to each other, and many popular measures such as Expected Calibration Error (ECE) fail to satisfy basic properties like continuity. We present a rigorous framework for analyzing calibration measures, inspired by the literature on property testing. We propose a ground-truth notion of distance from calibration: the $\ell_1$ distance to the nearest perfectly calibrated predictor. We define a consistent calibration measure as one that is a polynomial-factor approximation to this distance. Applying our framework, we identify three calibration measures that are consistent and can be estimated efficiently: smooth calibration, interval calibration, and Laplace kernel calibration. The former two give quadratic approximations to the ground truth distance, which we show is information-theoretically optimal. Our work thus establishes fundamental lower and upper bounds on measuring distance to calibration, and also provides theoretical justification for preferring certain metrics (like Laplace kernel calibration) in practice.
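For intuition on the binned measures discussed above, here is a minimal numpy sketch of an interval-style calibration estimate; it is an illustrative estimator in the spirit of interval calibration / ECE, not the paper's exact definition, and the function name and bin count are placeholders.

```python
import numpy as np

def interval_calibration_error(preds, labels, n_bins=10):
    """Binned calibration estimate: within each prediction interval, compare
    the mean prediction to the empirical outcome rate, weighted by the
    fraction of points falling in that interval."""
    preds, labels = np.asarray(preds, float), np.asarray(labels, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(preds, edges[1:-1]), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(preds[mask].mean() - labels[mask].mean())
    return err

# Example: a miscalibrated predictor that always outputs 0.7 on a fair coin.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)
print(interval_calibration_error(np.full(10_000, 0.7), y))  # roughly 0.2
```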
We present a new perspective on loss minimization and the recent notion of Omniprediction through the lens of Outcome Indistinguishability. For a collection of losses and a hypothesis class, omniprediction requires that a predictor provide a loss-minimization guarantee simultaneously for every loss in the collection compared to the best (loss-specific) hypothesis in the class. We present a generic template to learn predictors satisfying a guarantee we call Loss Outcome Indistinguishability. For a set of statistical tests--based on a collection of losses and hypothesis class--a predictor is Loss OI if it is indistinguishable (according to the tests) from Nature's true probabilities over outcomes. By design, Loss OI implies omniprediction in a direct and intuitive manner. We simplify Loss OI further, decomposing it into a calibration condition plus multiaccuracy for a class of functions derived from the loss and hypothesis classes. By careful analysis of this class, we give efficient constructions of omnipredictors for interesting classes of loss functions, including non-convex losses. This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration. We show that calibrated multiaccuracy implies Loss OI for the important set of convex losses arising from Generalized Linear Models, without requiring full multicalibration. For such losses, we show an equivalence between our computational notion of Loss OI and a geometric notion of indistinguishability, formulated as Pythagorean theorems in the associated Bregman divergence. We give an efficient algorithm for calibrated multiaccuracy with computational complexity comparable to that of multiaccuracy. In all, calibrated multiaccuracy offers an interesting tradeoff point between efficiency and generality in the omniprediction landscape.
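As a rough illustration of the calibration-plus-multiaccuracy decomposition, the sketch below audits multiaccuracy by correlating residuals against a small class of test functions; the coordinate functions used here are a simple stand-in for the loss-derived class that the paper defines precisely, and all data and names are synthetic.

```python
import numpy as np

def multiaccuracy_violation(preds, labels, test_fns, X):
    """Largest empirical correlation between the residual (y - p) and any
    test function c(x); multiaccuracy asks that this be small for all c."""
    resid = labels - preds
    return max(abs(np.mean(c(X) * resid)) for c in test_fns)

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
p_true = 1 / (1 + np.exp(-X[:, 0]))           # Nature's probabilities
y = rng.binomial(1, p_true)
preds = np.full(len(y), y.mean())             # constant predictor: calibrated, not multiaccurate

# Illustrative test class: the raw coordinates of x (a stand-in for the
# loss-derived class in the paper).
tests = [lambda Z, j=j: Z[:, j] for j in range(3)]
print(multiaccuracy_violation(preds, y, tests, X))   # clearly nonzero for x_0
```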
Introduced as a notion of algorithmic fairness, multicalibration has proved to be a powerful and versatile concept with implications well beyond its original intent. This stringent notion -- that predictions be well calibrated over a rich collection of intersecting subpopulations -- provides its strong guarantees at a cost: the computational and sample complexity of learning multicalibrated predictors is high, and grows exponentially with the number of class labels. In contrast, the relaxed notion of multiaccuracy can be achieved more efficiently, yet many of the most desirable properties of multicalibration cannot be guaranteed assuming multiaccuracy alone. This tension raises a key question: can we learn predictors with multicalibration-style guarantees at a cost commensurate with multiaccuracy? In this work, we define and initiate the study of low-degree multicalibration. Low-degree multicalibration defines a hierarchy of increasingly powerful multi-group fairness notions that spans multiaccuracy and the original formulation of multicalibration at the extremes. Our main technical contribution demonstrates that key properties of multicalibration, related to fairness and accuracy, in fact manifest as low-degree properties. Importantly, we show that low-degree multicalibration can be significantly more efficient than full multicalibration. In the multi-class setting, the sample complexity of achieving low-degree multicalibration improves exponentially (in the number of classes) over full multicalibration. Our work presents compelling evidence that low-degree multicalibration represents a sweet spot, pairing computational and sample efficiency with strong fairness and accuracy guarantees.
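One hedged reading of the low-degree hierarchy is sketched below: weight the residual check by monomials of the prediction up to a chosen degree, so that degree 0 behaves like a multiaccuracy-style audit and higher degrees move toward full multicalibration. The paper's formal definitions may differ in the choice of weight class and normalization; the data and test functions here are synthetic.

```python
import numpy as np

def low_degree_mc_violation(preds, labels, test_fns, X, degree):
    """Max correlation of the residual with c(x) * f(x)^d over test
    functions c and monomial weights of degree d = 0..degree."""
    resid = labels - preds
    worst = 0.0
    for c in test_fns:
        cx = c(X)
        for d in range(degree + 1):
            worst = max(worst, abs(np.mean(cx * preds**d * resid)))
    return worst

# Example with synthetic data and coordinate test functions.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 2))
f = 1 / (1 + np.exp(-X[:, 0]))                                  # predictions ignore x_1
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
tests = [lambda Z, j=j: Z[:, j] for j in range(2)]
print(low_degree_mc_violation(f, y, tests, X, degree=2))
```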
We present a non-asymptotic lower bound on the minimum eigenvalue of the expected design matrix generated by any linear bandit algorithm when the action set has well-behaved curvature. Specifically, we show that whenever an algorithm's expected cumulative regret is $O(\sqrt{n})$, the minimum eigenvalue of the expected design matrix grows as $\Omega(\sqrt{n})$, where $n$ is the learning horizon, provided the action space has a constant Hessian around the optimal arm. This shows that such action spaces force a polynomial lower bound rather than the logarithmic lower bound shown in \cite{lattimore2017end} for discrete (i.e., well-separated) action spaces. Furthermore, while previous results hold only in the asymptotic regime (as $n \to \infty$), our result for these ``locally rich'' action spaces holds at any time. Additionally, under mild technical assumptions, we obtain a similar lower bound on the minimum eigenvalue that holds with high probability. We apply our result to two practical scenarios -- \emph{model selection} and \emph{clustering} in linear bandits. For model selection, we show that an epoch-based linear bandit algorithm adapts to the complexity of the true model at a rate exponential in the number of epochs, by virtue of our new spectral bound. For clustering, we consider a multi-agent framework and show, again by leveraging the spectral result, that no forced exploration is necessary -- the agents can run a linear bandit algorithm and estimate their underlying parameters at once, and thus incur low regret.
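The toy simulation below only illustrates the quantity the bound concerns -- the minimum eigenvalue of the design matrix $\sum_t a_t a_t^\top$ -- under a stylized strategy on the unit sphere whose per-round perturbation shrinks like $t^{-1/4}$, so both its cumulative regret and $\lambda_{\min}$ grow on the order of $\sqrt{n}$. It is not one of the algorithms analyzed in the work, and all constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 20_000
theta = np.array([1.0, 0.0, 0.0])          # optimal direction on the unit sphere

V = np.zeros((d, d))                        # design matrix  sum_t a_t a_t^T
regret = 0.0
for t in range(1, n + 1):
    # Stylized low-regret play: perturb the optimum by noise of scale t^(-1/4),
    # so per-round regret shrinks like 1/sqrt(t) and cumulative regret is O(sqrt(n)).
    a = theta + rng.normal(scale=t ** -0.25, size=d)
    a /= np.linalg.norm(a)
    V += np.outer(a, a)
    regret += 1.0 - theta @ a

print(f"cumulative regret ~ {regret:.1f},  sqrt(n) = {np.sqrt(n):.1f}")
print(f"lambda_min(V_n)   ~ {np.linalg.eigvalsh(V)[0]:.1f}")
```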
We consider an improper reinforcement learning setting in which a learner is given $M$ base controllers for an unknown Markov decision process and wishes to combine them optimally to produce a controller that can potentially outperform each of the base ones. This can be useful for tuning across controllers, learnt possibly in mismatched or simulated environments, to obtain a good controller for a given target environment with relatively few trials. Towards this, we propose two algorithms: (1) a policy-gradient-based approach; and (2) an algorithm that can switch between a simple actor-critic (AC) scheme and a natural actor-critic (NAC) scheme depending on the available information. Both algorithms operate over a class of improper mixtures of the given controllers. For the first case, we derive convergence rate guarantees assuming access to a gradient oracle. For the AC-based approach, we provide convergence rate guarantees to a stationary point in the basic AC case and to the global optimum in the NAC case. Numerical results on (i) the standard control-theoretic benchmark of stabilizing a cartpole and (ii) a constrained queueing task show that our improper policy optimization algorithms can stabilize the system even when the base policies at their disposal are unstable.
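A minimal caricature of the improper-mixture idea is sketched below: sample which base controller to follow from a softmax over learnable logits and update the logits with a plain REINFORCE step on a toy one-step environment. The environment, payoffs, and learning rate are invented for illustration; this is not the paper's AC/NAC algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two fixed base controllers for a toy one-step environment with actions {0, 1};
# action 1 pays reward 1 with prob 0.8, action 0 with prob 0.3.
base_policies = [lambda: 0, lambda: 1]
pay = {0: 0.3, 1: 0.8}

# Improper combination: sample a base controller from a softmax over learnable
# logits, follow it, and apply a REINFORCE update to the logits.
logits = np.zeros(2)
lr = 0.05
for _ in range(5000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    k = rng.choice(2, p=probs)
    a = base_policies[k]()
    r = float(rng.random() < pay[a])
    grad = -probs
    grad[k] += 1.0                          # d log pi(k) / d logits
    logits += lr * r * grad                 # REINFORCE step (no baseline)

print(np.exp(logits) / np.exp(logits).sum())   # weight shifts toward controller 1
```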
Q-learning and SARSA(0) with $\epsilon$-greedy exploration are leading reinforcement learning methods, and their tabular forms converge to the optimal Q-function under reasonable conditions. However, with function approximation, these methods exhibit strange behaviors, e.g., policy oscillation and chattering, convergence to different attractors (possibly even the worst policy) on different runs, etc., apart from the usual instability. Accordingly, a theory to explain these phenomena has been a long-standing open problem, even for basic linear function approximation (Sutton, 1999). Our work uses differential inclusion theory to provide the first framework for resolving this problem. We further illustrate via numerical examples how this framework helps explain these algorithms' asymptotic behaviors.
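For concreteness, the setting in question is semi-gradient SARSA(0) with linear function approximation and $\epsilon$-greedy action selection; a minimal sketch on a randomly generated MDP is below, where the features, dynamics, and step size are arbitrary stand-ins chosen only to show the update rule.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, dim = 5, 2, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))     # transition kernel P[s, a] over next states
R = rng.random((nS, nA))                          # deterministic rewards
phi = rng.normal(size=(nS, nA, dim))              # fixed linear features phi(s, a)
gamma, eps, alpha = 0.9, 0.1, 0.01

w = np.zeros(dim)
q = lambda s: phi[s] @ w                          # approximate Q(s, .) under current weights

s = 0
a = rng.integers(nA) if rng.random() < eps else int(np.argmax(q(s)))
for _ in range(50_000):
    s2 = rng.choice(nS, p=P[s, a])
    a2 = rng.integers(nA) if rng.random() < eps else int(np.argmax(q(s2)))
    td = R[s, a] + gamma * q(s2)[a2] - q(s)[a]    # SARSA(0) temporal-difference error
    w += alpha * td * phi[s, a]                   # semi-gradient update
    s, a = s2, a2

print(w)
```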
We revisit the method of mixtures, also known as the Laplace method, for studying concentration phenomena in generic exponential families. Combining properties of the Bregman divergence associated with the log-partition function of the family with the method of mixtures for super-martingales, we establish a generic bound controlling the Bregman divergence between the parameter of the family and a finite-sample estimate of that parameter. Our bound is time-uniform and involves a quantity extending the classical information gain to exponential families, which we call the Bregman information gain. For the practitioner, we instantiate this novel bound for several classical families, e.g., Gaussian, Bernoulli, Exponential, Weibull, Pareto, Poisson and Chi-square, yielding explicit forms of the confidence sets and of the Bregman information gain. We further compare the resulting confidence bounds numerically to state-of-the-art alternatives for time-uniform concentration and show that this novel method yields competitive results. Finally, we highlight the benefit of our concentration bounds for some illustrative applications.
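As a sanity check on the central object, the sketch below computes the Bregman divergence induced by the log-partition function of the Bernoulli family and confirms numerically that it matches the KL divergence between the corresponding distributions (with the usual argument swap); it does not implement the paper's confidence sets or information gain.

```python
import numpy as np

def bregman(F, gradF, theta, theta_prime):
    """Bregman divergence D_F(theta, theta') induced by a convex potential F."""
    return F(theta) - F(theta_prime) - gradF(theta_prime) * (theta - theta_prime)

# Bernoulli family in natural parameters: F is the log-partition log(1 + e^theta).
F     = lambda t: np.log1p(np.exp(t))
gradF = lambda t: 1.0 / (1.0 + np.exp(-t))        # mean parameter (sigmoid)

def kl_bernoulli(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

p, q = 0.7, 0.4
t_p, t_q = np.log(p / (1 - p)), np.log(q / (1 - q))
# For exponential families, KL(P_a || P_b) equals D_F(theta_b, theta_a).
print(kl_bernoulli(p, q), bregman(F, gradF, t_q, t_p))   # both ~ 0.184
```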
We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO). We introduce Federated Loss Surface Aggregation (FLoRA), the first FL-HPO solution framework that can address use cases of tabular data and gradient-boosting training algorithms, in addition to the stochastic gradient descent / neural network use cases typically addressed in the FL literature. The framework enables single-shot FL-HPO by first identifying a good set of hyper-parameters to be used in a **single** FL training run. Thus, it enables FL-HPO solutions with minimal additional communication overhead compared to FL training without HPO. Our empirical evaluation of FLoRA for gradient-boosted decision trees on seven OpenML datasets demonstrates significant model accuracy improvements over the considered baselines, as well as robustness to an increasing number of parties involved in FL-HPO training.
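A deliberately simplified caricature of single-shot loss-surface aggregation is sketched below: each party evaluates a shared grid of hyper-parameter configurations on its own (here synthetic) data, the server averages the reported loss surfaces, and a single configuration is committed to for the one FL training run. FLoRA's actual aggregation strategies and training loop are more involved; every name and number here is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate hyper-parameter configurations (here: learning rate, max depth).
configs = [(lr, depth) for lr in (0.01, 0.1, 0.3) for depth in (2, 4, 6)]

def local_loss(party_seed, config):
    """Stand-in for a party evaluating one config on its own data."""
    lr, depth = config
    local_rng = np.random.default_rng(party_seed)
    return (lr - 0.1) ** 2 + 0.01 * (depth - 4) ** 2 + 0.02 * local_rng.normal()

# Each party reports its local loss surface; the server aggregates them and
# picks one configuration to use in a single federated training run.
n_parties = 5
surfaces = np.array([[local_loss(p, c) for c in configs] for p in range(n_parties)])
aggregated = surfaces.mean(axis=0)
best = configs[int(np.argmin(aggregated))]
print("single-shot choice:", best)
```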
The mathematical formalization of a neural mechanism in the fruit fly's olfactory circuit as a locality-sensitive hash (FlyHash) and bloom filter (FBF) has been "reprogrammed" for various machine learning tasks such as similarity search, outlier detection and text embeddings. We propose a novel reprogramming of this hash and bloom filter to emulate the canonical nearest neighbor classifier (NNC) in the challenging federated learning (FL) setting, where training and test data are spread across parties and no data can leave their respective parties. Specifically, we utilize FlyHash and FBF to create the FlyNN classifier, and theoretically establish conditions under which FlyNN matches NNC. We show how FlyNN is trained in an FL setting with low communication overhead to produce FlyNNFL, and how it can be made differentially private. Empirically, we demonstrate that (i) FlyNN matches NNC accuracy across 70 OpenML datasets, and (ii) FlyNNFL training is highly scalable with low communication overhead, providing up to $8\times$ speedup with $16$ parties.
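For readers unfamiliar with FlyHash, the sketch below shows its core recipe -- a sparse binary random projection into a high-dimensional space followed by a winner-take-all step that keeps only the top-$k$ activations -- with dimensions and sparsity chosen arbitrarily for illustration; the FBF and FlyNN constructions in the work build on top of this hash.

```python
import numpy as np

def flyhash(x, proj, k):
    """FlyHash-style locality-sensitive hash: a sparse binary random
    projection followed by winner-take-all -- set the k largest
    activations to 1 and zero out the rest."""
    act = proj @ x
    h = np.zeros(proj.shape[0], dtype=np.uint8)
    h[np.argpartition(act, -k)[-k:]] = 1
    return h

rng = np.random.default_rng(0)
d, m, k = 50, 2000, 32                            # input dim, expansion dim, hash density
proj = (rng.random((m, d)) < 0.1).astype(float)   # each row samples ~10% of input coordinates

x = rng.random(d)
near, far = x + 0.01 * rng.random(d), rng.random(d)
hx, hn, hf = (flyhash(v, proj, k) for v in (x, near, far))
print((hx & hn).sum(), (hx & hf).sum())           # the nearby point shares more active bits
```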
Both numerical weather prediction (NWP) and machine learning (ML) methods are popular for solar forecasting. However, NWP models have multiple possible physical parameterizations, which requires site-specific NWP optimization. This is further complicated when a regional NWP model is used together with a global climate model that has its own set of possible parameterizations. In this study, an alternative approach is proposed and evaluated for four radiation models. The Weather Research and Forecasting (WRF) model is run in global and regional modes to provide estimates of solar irradiance. These estimates are then post-processed with ML to provide the final forecast. The ML error-correction model reduces the normalized root mean square error of WRF by up to 40-50%. The results obtained with the CAM, GFDL, New Goddard and RRTMG radiation models are comparable after this correction, negating the need for WRF parameterization tuning. Additional models incorporating data from nearby sites and from sensors were also evaluated, the latter being particularly promising.
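The post-processing step can be pictured with the synthetic sketch below, which fits a simple least-squares correction from a raw WRF-style irradiance estimate to observations and compares normalized RMSE before and after; the data, the linear model, and the train/test split are all stand-ins for the study's actual ML models and measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
truth = 400 + 300 * np.sin(np.linspace(0, 20, n))     # "observed" irradiance (W/m^2)
wrf = 1.2 * truth + 60 + 40 * rng.normal(size=n)      # biased, noisy NWP-style estimate

# Post-process the NWP output with a simple ML correction (ordinary least
# squares on [1, wrf]); gradient boosting or other learners slot in the same way.
split = n // 2
A = np.column_stack([np.ones(split), wrf[:split]])
coef, *_ = np.linalg.lstsq(A, truth[:split], rcond=None)
corrected = coef[0] + coef[1] * wrf[split:]

nrmse = lambda pred, obs: np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean()
print("raw WRF NRMSE:      ", round(nrmse(wrf[split:], truth[split:]), 3))
print("ML-corrected NRMSE: ", round(nrmse(corrected, truth[split:]), 3))
```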